
    Fast traffic sign recognition using color segmentation and deep convolutional networks

    The use of Computer Vision techniques for the automatic recognition of road signs is fundamental for the development of intelligent vehicles and advanced driver assistance systems. In this paper, we describe a procedure based on color segmentation, Histogram of Oriented Gradients (HOG), and Convolutional Neural Networks (CNN) for detecting and classifying road signs. Detection is speeded up by a pre-processing step to reduce the search space, while classification is carried out by using a Deep Learning technique. A quantitative evaluation of the proposed approach has been conducted on the well-known German Traffic Sign data set and on the novel Data set of Italian Traffic Signs (DITS), which is publicly available and contains challenging sequences captured in adverse weather conditions and in an urban scenario at night-time. Experimental results demonstrate the effectiveness of the proposed approach in terms of both classification accuracy and computational speed.
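
    A minimal sketch of the kind of pipeline described above (not the authors' exact implementation): HSV color thresholding narrows the search space to candidate regions, and a HOG descriptor is computed for each candidate before classification. The HSV ranges, minimum blob size, and HOG parameters below are illustrative assumptions.

```python
import cv2

def candidate_sign_regions(bgr_image):
    """Color-segmentation pre-processing: return bounding boxes of blobs
    whose color roughly matches red or blue road signs."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    red_low = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255))
    red_high = cv2.inRange(hsv, (170, 70, 50), (180, 255, 255))
    blue = cv2.inRange(hsv, (100, 70, 50), (130, 255, 255))
    mask = red_low | red_high | blue
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to plausibly be a sign.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 400]

def hog_descriptor(gray_patch):
    """HOG feature vector for one candidate region (64x64 window, 9 bins)."""
    patch = cv2.resize(gray_patch, (64, 64))
    hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
    return hog.compute(patch)
```

    Each surviving region would then be resized and passed to the CNN classifier, which is the expensive step the pre-processing is meant to invoke as rarely as possible.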

    Plane extraction for indoor place recognition

    In this paper, we present an image-based plane extraction method well suited for real-time operation. Our approach exploits the assumption that the surrounding scene is mainly composed of planes disposed in known directions. Planes are detected from a single image by exploiting a voting scheme that takes into account the vanishing lines. Then, candidate planes are validated and merged using a region-growing based approach to detect, in real time, planes inside an unknown indoor environment. Using the related plane homographies, it is possible to remove the perspective distortion, enabling standard place recognition algorithms to work in a viewpoint-invariant setup. Quantitative experiments performed with real-world images show the effectiveness of our approach compared with a very popular method.
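
    The rectification step mentioned above can be illustrated with a short homography sketch, assuming the four image corners of a detected plane are already known; this is not the paper's implementation, only the standard warp it relies on.

```python
import cv2
import numpy as np

def rectify_plane(image, plane_corners_px, out_size=(400, 300)):
    """Warp a detected plane to a fronto-parallel view.

    plane_corners_px: the plane's four corners in the image, ordered
    top-left, top-right, bottom-right, bottom-left.
    """
    w, h = out_size
    src = np.asarray(plane_corners_px, dtype=np.float32)
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, dst)  # plane-induced homography
    return cv2.warpPerspective(image, H, (w, h))
```

    The rectified patch can then be handed to an off-the-shelf place recognition pipeline regardless of the original viewpoint.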

    DOP: Deep Optimistic Planning with Approximate Value Function Evaluation

    Research on reinforcement learning has demonstrated promising results in manifold applications and domains. Still, efficiently learning effective robot behaviors is very difficult, due to unstructured scenarios, high uncertainties, and large state dimensionality (e.g., multi-agent systems or hyper-redundant robots). To alleviate this problem, we present DOP, a deep model-based reinforcement learning algorithm, which exploits action values to both (1) guide the exploration of the state space and (2) plan effective policies. Specifically, we exploit deep neural networks to learn Q-functions that are used to attack the curse of dimensionality during a Monte-Carlo tree search. Our algorithm, in fact, constructs upper confidence bounds on the learned value function to select actions optimistically. We implement and evaluate DOP on different scenarios: (1) a cooperative navigation problem, (2) a fetching task for a 7-DOF KUKA robot, and (3) a human-robot handover with a humanoid robot (both in simulation and on the real robot). The obtained results show the effectiveness of DOP in the chosen applications, where action values drive the exploration and reduce the computational demand of the planning process while achieving good performance.
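
    A minimal sketch of the optimistic action selection idea, under the assumption of a small discrete action set: a learned Q-network provides value estimates and a UCB-style bonus based on visit counts encourages exploration during the tree search. The network architecture and exploration constant are illustrative, not the paper's settings.

```python
import math
import numpy as np
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small fully connected Q-function approximator."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, state):
        return self.net(state)

def select_action_ucb(qnet, state, visit_counts, c=1.0):
    """Pick argmax_a [ Q(s, a) + c * sqrt(log(N) / n(s, a)) ]."""
    with torch.no_grad():
        q = qnet(torch.as_tensor(state, dtype=torch.float32)).numpy()
    total = sum(visit_counts)
    bonus = np.array([c * math.sqrt(math.log(total + 1) / max(1, n))
                      for n in visit_counts])
    return int(np.argmax(q + bonus))
```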

    Q-CP: Learning Action Values for Cooperative Planning

    Research on multi-robot systems has demonstrated promising results in manifold applications and domains. Still, efficiently learning effective robot behaviors is very difficult, due to unstructured scenarios, high uncertainties, and large state dimensionality (e.g., hyper-redundant robots and groups of robots). To alleviate this problem, we present Q-CP, a cooperative model-based reinforcement learning algorithm, which exploits action values to both (1) guide the exploration of the state space and (2) generate effective policies. Specifically, we exploit Q-learning to attack the curse of dimensionality in the iterations of a Monte-Carlo Tree Search. We implement and evaluate Q-CP on different stochastic cooperative (general-sum) games: (1) a simple cooperative navigation problem among 3 robots, (2) a cooperation scenario between a pair of KUKA YouBots performing hand-overs, and (3) a coordination task between two mobile robots entering a door. The obtained results show the effectiveness of Q-CP in the chosen applications, where action values drive the exploration and reduce the computational demand of the planning process while achieving good performance.
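
    To make the "action values for cooperative planning" idea concrete, here is a tabular joint-action Q-learning sketch for a two-robot general-sum game; the state encoding, learning rate, and discount are illustrative assumptions rather than the paper's configuration.

```python
from collections import defaultdict
from itertools import product

class JointQ:
    """Tabular Q-values over joint actions of two cooperating robots."""
    def __init__(self, joint_actions, alpha=0.1, gamma=0.95):
        self.q = defaultdict(float)         # (state, joint_action) -> value
        self.joint_actions = joint_actions  # e.g. [(a_robot1, a_robot2), ...]
        self.alpha, self.gamma = alpha, gamma

    def best_value(self, state):
        return max(self.q[(state, a)] for a in self.joint_actions)

    def update(self, state, joint_action, reward, next_state):
        """Standard one-step Q-learning backup on the joint action."""
        target = reward + self.gamma * self.best_value(next_state)
        key = (state, joint_action)
        self.q[key] += self.alpha * (target - self.q[key])

# Example: two robots negotiating a door, each with actions {"wait", "go"}.
q = JointQ(list(product(["wait", "go"], repeat=2)))
q.update("at_door", ("go", "wait"), reward=1.0, next_state="passed")
```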

    Language-based sensing descriptors for robot object grounding

    In this work, we consider an autonomous robot that is required to understand commands given by a human through natural language. Specifically, we assume that this robot is provided with an internal representation of the environment. However, such a representation is unknown to the user. In this context, we address the problem of allowing a human to understand the robot's internal representation through dialog. To this end, we introduce the concept of sensing descriptors. These descriptors are used by the robot to recognize unknown object properties in the given commands and warn the user about them. Additionally, we show how these properties can be learned over time by leveraging past interactions in order to enhance the grounding capabilities of the robot.
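
    As a toy illustration of the sensing-descriptor idea (the vocabulary and names are invented for this sketch), the robot can keep a map from perceivable properties to the values it is able to recognize, and flag any property word in a command that it cannot ground, so it can warn the user and ask for clarification.

```python
# Properties the (hypothetical) robot can actually sense, and their values.
KNOWN_DESCRIPTORS = {
    "color": {"red", "blue", "green"},
    "size": {"small", "large"},
}

def ungroundable(property_words):
    """Among the property words extracted from a command, return those the
    robot has no sensing descriptor for and should warn the user about."""
    known_values = {v for values in KNOWN_DESCRIPTORS.values() for v in values}
    return [w for w in property_words if w not in known_values]

# e.g. a parser extracted the property words of "the heavy red mug":
print(ungroundable(["heavy", "red"]))  # -> ['heavy']
```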

    Teaching robots parametrized executable plans through spoken interaction

    While operating in domestic environments, robots will necessarily face difficulties not envisioned by their developers at programming time. Moreover, the tasks to be performed by a robot will often have to be specialized and/or adapted to the needs of specific users and specific environments. Hence, learning how to operate by interacting with the user seems a key enabling feature to support the introduction of robots in everyday environments. In this paper we contribute a novel approach for learning, through interaction with the user, task descriptions defined as combinations of primitive actions. The proposed approach takes a significant step forward by making task descriptions parametric with respect to domain-specific semantic categories. Moreover, by mapping the task representation into a task representation language, we are able to express complex execution paradigms and to revise the learned tasks in a high-level fashion. The approach is evaluated in multiple practical applications with a service robot.
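
    A minimal sketch of a parametric task description in the spirit of the approach above (the representation and names are invented for illustration): a task is a sequence of primitive actions whose arguments are semantic categories, so the same learned plan can be instantiated for different users and environments.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PrimitiveAction:
    name: str               # e.g. "goto", "grasp", "handover"
    params: Dict[str, str]  # parameter name -> semantic category

@dataclass
class TaskDescription:
    name: str
    steps: List[PrimitiveAction]

    def instantiate(self, bindings: Dict[str, str]) -> List[str]:
        """Ground each semantic category to a concrete entity."""
        plan = []
        for step in self.steps:
            args = {k: bindings[cat] for k, cat in step.params.items()}
            arg_str = ", ".join(f"{k}={v}" for k, v in args.items())
            plan.append(f"{step.name}({arg_str})")
        return plan

serve = TaskDescription("serve_drink", [
    PrimitiveAction("goto", {"target": "LOCATION"}),
    PrimitiveAction("grasp", {"object": "DRINK"}),
    PrimitiveAction("handover", {"recipient": "PERSON"}),
])
print(serve.instantiate({"LOCATION": "kitchen", "DRINK": "water_bottle", "PERSON": "guest"}))
```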

    Effective Target Aware Visual Navigation for UAVs

    In this paper we propose an effective vision-based navigation method that allows a multirotor vehicle to simultaneously reach a desired goal pose in the environment while constantly facing a target object or landmark. Standard techniques such as Position-Based Visual Servoing (PBVS) and Image-Based Visual Servoing (IBVS) in some cases (e.g., while the multirotor is performing fast maneuvers) do not allow the vehicle to constantly maintain the line of sight with a target of interest. Instead, we compute the optimal trajectory by solving a non-linear optimization problem that minimizes the target re-projection error while meeting the UAV's dynamic constraints. The desired trajectory is then tracked by means of a real-time Non-linear Model Predictive Controller (NMPC): this implicitly allows the multirotor to satisfy both of the required constraints. We successfully evaluate the proposed approach in many real and simulated experiments, making an exhaustive comparison with a standard approach.
    Comment: Conference paper at "European Conference on Mobile Robotics" (ECMR) 201
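
    A minimal sketch of the core cost term described above, keeping only the target re-projection error and optimizing a single yaw angle; the vehicle dynamics, the full trajectory optimization, and the NMPC tracking are omitted, and the pinhole intrinsics and geometry are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed pinhole intrinsics; the principal point is the image center.
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])

def reprojection_error(yaw, p_uav, p_target):
    """Pixel distance between the projected target and the image center,
    for a camera looking along the body x axis of a yaw-only vehicle."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])  # world -> body
    p_cam = R @ (p_target - p_uav)
    if p_cam[0] <= 0.1:          # target behind the camera: large penalty
        return 1e6
    uv = K @ np.array([p_cam[1] / p_cam[0], p_cam[2] / p_cam[0], 1.0])
    return float(np.hypot(uv[0] - K[0, 2], uv[1] - K[1, 2]))

p_uav = np.array([0.0, 0.0, 2.0])
p_target = np.array([3.0, 1.0, 1.0])
res = minimize_scalar(lambda y: reprojection_error(y, p_uav, p_target),
                      bounds=(-np.pi, np.pi), method="bounded")
print("yaw keeping the target centered:", res.x)
```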

    New Neutral Gauge Bosons and New Heavy Fermions in the Light of the New LEP Data

    We derive limits on a class of new physics effects that are naturally present in grand unified theories based on extended gauge groups, and in particular in $E_6$ and $SO(10)$ models. We concentrate on i) the effects of the mixing of new neutral gauge bosons with the standard $Z_0$; ii) the effects of a mixing of the known fermions with new heavy states. We perform a global analysis including all the LEP data on the $Z$ decay widths and asymmetries collected until 1993, the SLC measurement of the left-right asymmetry, the measurement of the $W$ boson mass, various charged current constraints, and the low energy neutral current experiments. We use a top mass value in the range announced by CDF. We derive limits on the $Z_0$--$Z_1$ mixing, which are always $\lesssim 0.01$ and are at the level of a few per mille if some specific model is assumed. Model-dependent theoretical relations between the mixing and the mass of the new gauge boson in most cases require $M_{Z'} > 1$ TeV. Limits on light-heavy fermion mixings are also largely improved with respect to previous analyses, and are particularly relevant for a class of models that we discuss.
    Comment: 12 pages (including two tables), revised version, accepted for publication in Phys. Lett. B. Includes a discussion of the $m_t$ and $\alpha_s$ dependence of the bounds on the $Z'$ mass and the fermion mixing.
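
    The "model-dependent theoretical relations" referred to above take, in many extended gauge models, the schematic form below (standard Z' phenomenology, not quoted verbatim from the paper), which is why a per-mille bound on the mixing translates into a TeV-scale lower bound on the heavy mass.

```latex
% Schematic relation between the Z_0--Z_1 mixing angle and the heavy mass;
% C is a model-dependent coefficient of order one.
\theta_{Z_0 Z_1} \;\simeq\; C\,\frac{M_{Z_0}^{2}}{M_{Z'}^{2}},
\qquad C = \mathcal{O}(1)
% With \theta_{Z_0 Z_1} constrained to a few per mille and C ~ 1, this
% pushes M_{Z'} above roughly 1 TeV, as stated in the abstract.
```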

    Does terrain slope really dominate goal searching?

    If you can locate a target by using one reliable source of information, why would you use an unreliable one? A similar question has been faced in a recent study on homing pigeons, in which, despite the presence of better predictors of the goal location, the slope of the floor in an arena dominated the searching process. This piece of evidence seems to contradict straightforward accounts of associative learning, according to which behavior should be controlled by the stimulus that best predicts the reward, and has fueled interest toward one question that, to date, has received scarce attention in the field of spatial cognition: how are vertical spaces represented? The purpose of this communication is to briefly review the studies on this issue, trying to determine whether slope is a special cue, driving behavior irrespective of other cues, or simply a very salient one.
